field theory



On the Effect of Regularization on Nonparametric Mean-Variance Regression

Wong-Toi, Eliot, Boyd, Alex, Fortuin, Vincent, Mandt, Stephan

arXiv.org Machine Learning

Uncertainty quantification is vital for decision-making and risk assessment in machine learning. Mean-variance regression models, which predict both a mean and residual noise for each data point, provide a simple approach to uncertainty quantification. However, overparameterized mean-variance models struggle with a signal-to-noise ambiguity: whether prediction targets should be attributed to signal (mean) or noise (variance). At one extreme, models fit all training targets perfectly with zero residual noise; at the other, they provide constant, uninformative predictions and explain the targets entirely as noise. We observe a sharp phase transition between these extremes, driven by model regularization. Empirical studies with varying regularization levels illustrate this transition and reveal substantial variability across repeated runs. To explain this behavior, we develop a statistical field theory framework that captures the observed phase transition in agreement with the experimental results. This analysis reduces the regularization hyperparameter search space from two dimensions to one, significantly lowering computational costs. Experiments on UCI datasets and the large-scale ClimSim dataset demonstrate robust calibration performance, effectively quantifying predictive uncertainty.
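The core objective is easy to sketch: a network with two output heads minimizes the Gaussian negative log-likelihood, and a regularizer (here weight decay, one possible choice) controls how much of the target is absorbed by the mean versus the variance. The PyTorch snippet below is a minimal illustration under these assumptions, not the paper's implementation; the architecture, toy data, and hyperparameters are invented for the example.

```python
# Minimal mean-variance (heteroscedastic) regression sketch in PyTorch.
# Not the paper's implementation; it only illustrates the Gaussian NLL
# objective and the regularization knob discussed in the abstract.
import torch
import torch.nn as nn

class MeanVarianceNet(nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        self.mean_head = nn.Linear(hidden, 1)     # predicts the signal
        self.logvar_head = nn.Linear(hidden, 1)   # predicts residual noise

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, y):
    # Per-point negative log-likelihood of y under N(mean, exp(logvar)).
    return 0.5 * (logvar + (y - mean) ** 2 / logvar.exp()).mean()

# Toy data: y = sin(x) plus heteroscedastic noise.
torch.manual_seed(0)
x = torch.linspace(-3, 3, 256).unsqueeze(-1)
y = torch.sin(x) + 0.1 * (1 + x.abs()) * torch.randn_like(x)

model = MeanVarianceNet(in_dim=1)
# weight_decay is the regularization strength; sweeping it exposes the
# interpolation-vs-constant-prediction extremes described above.
opt = torch.optim.Adam(model.parameters(), lr=1e-2, weight_decay=1e-4)

for step in range(2000):
    opt.zero_grad()
    mean, logvar = model(x)
    loss = gaussian_nll(mean, logvar, y)
    loss.backward()
    opt.step()
```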


Fermions and Supersymmetry in Neural Network Field Theories

Frank, Samuel, Halverson, James, Maiti, Anindita, Ruehle, Fabian

arXiv.org Artificial Intelligence

We introduce fermionic neural network field theories via Grassmann-valued neural networks. Free theories are obtained by a generalization of the Central Limit Theorem to Grassmann variables. This enables the realization of the free Dirac spinor at infinite width and a four-fermion interaction at finite width. Yukawa couplings are introduced by breaking the statistical independence of the output weights for the fermionic and bosonic fields. A large class of interacting supersymmetric quantum mechanics and field theory models is introduced by super-affine transformations on the input that realize a superspace formalism.
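As a toy illustration of the algebra underlying Grassmann-valued networks (our own minimal construction, not the paper's), the following Python class implements sums of products of anticommuting generators, exhibiting the defining rules θiθj = −θjθi and θi² = 0:

```python
# Toy Grassmann algebra: elements are sums of coefficients times ordered
# products of anticommuting generators theta_i. Illustrative only; not
# the construction used in the paper.

class Grassmann:
    def __init__(self, terms=None):
        # terms: {tuple_of_sorted_generator_indices: coefficient}
        self.terms = dict(terms or {})

    @staticmethod
    def gen(i):
        return Grassmann({(i,): 1.0})

    def __add__(self, other):
        out = dict(self.terms)
        for k, c in other.terms.items():
            out[k] = out.get(k, 0.0) + c
        return Grassmann({k: c for k, c in out.items() if c != 0.0})

    def __mul__(self, other):
        out = {}
        for a, ca in self.terms.items():
            for b, cb in other.terms.items():
                if set(a) & set(b):
                    continue  # repeated generator: theta_i ** 2 = 0
                merged, sign = list(a), 1
                for g in b:
                    # sign from transpositions needed to keep sorted order
                    pos = sum(1 for h in merged if h < g)
                    sign *= (-1) ** (len(merged) - pos)
                    merged.insert(pos, g)
                key = tuple(merged)
                out[key] = out.get(key, 0.0) + sign * ca * cb
        return Grassmann({k: c for k, c in out.items() if c != 0.0})

    def __repr__(self):
        return " + ".join(f"{c}*θ{''.join(map(str, k))}"
                          for k, c in sorted(self.terms.items())) or "0"

t1, t2 = Grassmann.gen(1), Grassmann.gen(2)
print(t1 * t2)          # 1.0*θ12
print(t2 * t1)          # -1.0*θ12  (anticommutation)
print((t1 * t1).terms)  # {}        (nilpotency)
```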


Is Phase Really Needed for Weakly-Supervised Dereverberation?

Rodrigues, Marius, Bahrman, Louis, Badeau, Roland, Richard, Gaël

arXiv.org Machine Learning

In unsupervised or weakly-supervised approaches to speech dereverberation, the target clean (dry) signals are considered unknown during training. In that context, it becomes critical to evaluate how much information can be retrieved from the reverberant (wet) speech alone. This work investigates the role of the reverberant (wet) phase in the time-frequency domain. Based on Statistical Wave Field Theory, we show that late reverberation perturbs phase components with white, uniformly distributed noise, except at low frequencies. Consequently, the wet phase carries limited useful information and is not essential for weakly-supervised dereverberation. To validate this finding, we train dereverberation models under a recent weak-supervision framework and demonstrate that performance can be significantly improved by excluding the reverberant phase from the loss function.
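The practical consequence is a choice of training loss. The sketch below (with illustrative STFT settings and toy signals, not the paper's weak-supervision framework) contrasts a complex-STFT loss, which penalizes phase mismatch, with a magnitude-only loss that discards phase entirely:

```python
# Sketch: complex-STFT loss (uses phase) vs. magnitude-only loss (drops
# phase), illustrating what "excluding the reverberant phase" means.
# Window settings and signals are illustrative assumptions.
import torch

def stft(x, n_fft=512, hop=128):
    window = torch.hann_window(n_fft)
    return torch.stft(x, n_fft, hop_length=hop, window=window,
                      return_complex=True)

def complex_loss(estimate, target):
    # Penalizes both magnitude and phase mismatches.
    return (stft(estimate) - stft(target)).abs().mean()

def magnitude_loss(estimate, target):
    # Phase is dropped: only spectral magnitudes are compared.
    return (stft(estimate).abs() - stft(target).abs()).abs().mean()

# Toy usage with random signals standing in for dry/wet speech.
dry = torch.randn(16000)
wet = dry + 0.3 * torch.roll(dry, 800)  # crude "reverberant" copy
print(complex_loss(wet, dry).item(), magnitude_loss(wet, dry).item())
```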




Scalable Quantum State Preparation via Large-Language-Model-Driven Discovery

Cao, Qing-Hong, Hou, Zong-Yue, Li, Ying-Ying, Liu, Xiaohui, Song, Zhuo-Yang, Zhang, Liang-Qi, Zhang, Shutao, Zhao, Ke

arXiv.org Artificial Intelligence

Efficient quantum state preparation remains a central challenge in first-principles quantum simulations of dynamics in quantum field theories, where the Hilbert space is intrinsically infinite-dimensional. Here, we introduce a large language model (LLM)-assisted framework for quantum-circuit design that systematically scales state-preparation circuits to large lattice volumes. Applied to a 1+1d XY spin chain, the LLM autonomously discovers a compact 4-parameter circuit that captures boundary-induced symmetry breaking with sub-percent energy deviation, enabling successful validation on the Zuchongzhi quantum processor. Guided by this insight, we extend the framework to 2+1d quantum field theories, where scalable variational ansätze have remained elusive. For a scalar field theory, the search yields a symmetry-preserving, 3-parameter shallow-depth ansatz whose optimized parameters converge to size-independent constants for lattices n ≥ 4, providing, to our knowledge, the first scalable ansatz for this class of 2+1d models. Our results establish a practical route toward AI-assisted, human-guided discovery in quantum simulation.
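To make the notion of a few-parameter, shallow, structure-respecting ansatz concrete, here is a generic Qiskit sketch; the gate pattern and parameter sharing are illustrative assumptions, not the circuit discovered by the LLM:

```python
# Minimal sketch of a shallow, few-parameter variational ansatz in
# Qiskit. The gate pattern below is illustrative only; it is NOT the
# LLM-discovered circuit from the paper, just the general shape of one.
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter

def shallow_ansatz(n_qubits):
    # Three shared parameters, reused on every site, so the circuit keeps
    # a uniform structure (and constant parameter count) as the lattice
    # grows -- the property the abstract calls "scalable".
    a, b, c = Parameter("a"), Parameter("b"), Parameter("c")
    qc = QuantumCircuit(n_qubits)
    for q in range(n_qubits):
        qc.ry(a, q)              # single-site rotation layer
    for q in range(n_qubits - 1):
        qc.cx(q, q + 1)          # nearest-neighbor entanglers
        qc.rz(b, q + 1)
        qc.cx(q, q + 1)
    for q in range(n_qubits):
        qc.ry(c, q)              # closing rotation layer
    return qc

print(shallow_ansatz(4).draw())
```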


Bulk-boundary decomposition of neural networks

Lee, Donghee, Lee, Hye-Sung, Yi, Jaeok

arXiv.org Artificial Intelligence

We present the bulk-boundary decomposition as a new framework for understanding the training dynamics of deep neural networks. Starting from the stochastic gradient descent formulation, we show that the Lagrangian can be reorganized into a data-independent bulk term and a data-dependent boundary term. The bulk captures the intrinsic dynamics set by network architecture and activation functions, while the boundary reflects stochastic interactions from training samples at the input and output layers. As a natural extension, we develop a field-theoretic formulation of neural dynamics based on this decomposition.
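Schematically, in our own notation (the paper defines the precise terms), the decomposition has the shape:

```latex
% Schematic only, not the paper's exact expressions: the training
% Lagrangian splits into a data-independent bulk piece and a
% data-dependent boundary piece sourced by the training samples.
\mathcal{L}[\theta] =
\underbrace{\mathcal{L}_{\mathrm{bulk}}\big[\theta, \dot{\theta}\big]}_{\text{architecture, activations}}
+ \underbrace{\mathcal{L}_{\mathrm{bdy}}\big[\theta; \{(x_i, y_i)\}\big]}_{\text{data terms at input/output layers}}
```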


Viability of perturbative expansion for quantum field theories on neurons

Sen, Srimoyee, Vaidya, Varun

arXiv.org Artificial Intelligence

Accelerated progress in machine learning (ML) over the past decade has had significant impact across many research domains, including physics, and has motivated substantial interdisciplinary work. At the intersection of physics and machine learning, two prominent practical questions have emerged: 1. Can techniques from statistical mechanics and the path integral formulation of quantum field theory (QFT) help us build a theoretical understanding of how neural networks learn? 2. Can neural networks be used to facilitate computations in quantum field theory? These two questions are deeply interrelated and motivate the directions we explore in this work. The second question splits naturally into two subcategories: (a) applied machine learning for physics problems, and (b) the theoretical interplay between machine learning and QFT techniques. The application of ML to physics has already seen considerable progress.